Results 1 - 20 of 4,295

1.
Proceedings of SPIE - The International Society for Optical Engineering ; 12602, 2023.
Article in English | Scopus | ID: covidwho-20245409

ABSTRACT

With the outbreak of COVID-19, its prevention and treatment have gradually become a focus of public health, and most patients are also concerned about their symptoms. COVID-19 presents symptoms similar to the common cold and cannot be diagnosed from symptoms alone, so medical images of the lungs must be examined to determine whether a patient is COVID-19 positive. As the number of patients with pneumonia-like symptoms increases, more and more lung medical images need to be generated and read. At the same time, the number of physicians falls far short of patient demand, so patients cannot have their conditions detected and assessed in time. To address this, we performed image augmentation and data cleaning on a dataset of COVID-19 lung medical images and designed a deep learning classification network to make accurate classification judgments. Through a new fine-tuning method and hyperparameter tuning that we designed, the network achieves 95.76% classification accuracy on this task, with higher accuracy and less training time than classic convolutional neural network models. © 2023 SPIE.

2.
Tien Tzu Hsueh Pao/Acta Electronica Sinica ; 51(1):202-212, 2023.
Article in Chinese | Scopus | ID: covidwho-20245323

ABSTRACT

COVID-19 (coronavirus disease 2019) has had serious impacts worldwide, and many scholars have carried out extensive research on prevention and control of the epidemic. Diagnosing COVID-19 from cough is non-contact, low-cost, and easy to access; however, such research is still relatively scarce in China. The Mel-frequency cepstral coefficient (MFCC) feature can only represent static sound characteristics, while the first-order differential MFCC feature also reflects the dynamic characteristics of sound. To better prevent and treat COVID-19, this paper proposes a dynamic-static dual-input deep neural network algorithm for diagnosing COVID-19 from cough. Based on the Coswara dataset, cough audio is clipped, MFCC and first-order differential MFCC features are extracted, and a dual-input neural network model over dynamic and static features is trained. The model adopts a statistics pooling layer so that MFCC features of different lengths can be input. The experimental results show that the proposed algorithm significantly improves recognition accuracy, recall, specificity, and F1-score compared with existing models. © 2023 Chinese Institute of Electronics. All rights reserved.
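The static/dynamic feature split described above can be sketched in a few lines. This is an illustrative numpy version: real pipelines typically use a library such as librosa for MFCC extraction, and the 13 × 50 matrix here is synthetic stand-in data, not actual Coswara features.

```python
import numpy as np

def delta(mfcc: np.ndarray) -> np.ndarray:
    """First-order differential of an MFCC matrix (n_coeffs x n_frames),
    via a centered difference with edge padding so the shape is preserved."""
    padded = np.pad(mfcc, ((0, 0), (1, 1)), mode="edge")
    return (padded[:, 2:] - padded[:, :-2]) / 2.0

def statistic_pool(features: np.ndarray) -> np.ndarray:
    """Statistics pooling: collapse the variable-length frame axis into a
    fixed-size vector of per-coefficient means and standard deviations,
    so clips of any duration yield same-size network inputs."""
    return np.concatenate([features.mean(axis=1), features.std(axis=1)])

# Synthetic example: 13 MFCC coefficients over 50 frames of a cough clip.
rng = np.random.default_rng(0)
mfcc = rng.standard_normal((13, 50))

static_vec = statistic_pool(mfcc)          # static-branch input
dynamic_vec = statistic_pool(delta(mfcc))  # dynamic-branch input
```

The two fixed-length vectors would feed the static and dynamic branches of the dual-input network, which is how the statistics pooling layer accommodates variable-length recordings.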

3.
2022 IEEE Information Technologies and Smart Industrial Systems, ITSIS 2022 ; 2022.
Article in English | Scopus | ID: covidwho-20245166

ABSTRACT

The World Health Organization has classified the novel coronavirus disease (COVID-19) as a pandemic since March 2020. It is a new viral infection with respiratory tropism that can lead to atypical pneumonia. According to experts, early detection of people infected with the COVID-19 virus is therefore highly needed: infected patients can then be isolated from other individuals so the infection does not spread. As a result, developing early detection and diagnosis procedures that enable speedy treatment and stop transmission of the virus has become a focus of research. Alternative early-screening approaches have become necessary because current testing methods, such as the reverse transcription polymerase chain reaction (RT-PCR) test, are time-consuming. This work thoroughly reviews methods for detecting COVID-19 with deep learning (DL) algorithms from the sound modality, which has become an active research area in recent years. Although the majority of newly proposed methods are based on medical images (i.e., X-ray and CT scans), this comprehensive survey shows that the sound modality can be a good alternative, providing a faster and easier way to create a high-performance database. We also present the most popular sound databases proposed for COVID-19 detection. © 2022 IEEE.

4.
Applied Sciences ; 13(11):6515, 2023.
Article in English | ProQuest Central | ID: covidwho-20244877

ABSTRACT

With the advent of the fourth industrial revolution, data-driven decision making has become an integral part of decision making, and deep learning, one of the revolution's core technologies, has become vital to it. However, in the era of epidemics and big data, the volume of data has increased dramatically while its sources have become progressively more complex, making data distributions highly susceptible to change. These situations can easily lead to concept drift, which directly affects the effectiveness of prediction models. How to cope with such complex situations and make timely, accurate decisions from multiple perspectives is a challenging research issue. To address this challenge, we summarize concept drift adaptation methods under the deep learning framework, which helps decision makers make better decisions and analyze the causes of concept drift. First, we provide an overall introduction to concept drift: its definition, causes, and types, and the process of concept drift adaptation under the deep learning framework. Second, we summarize concept drift adaptation methods in terms of discriminative learning, generative learning, hybrid learning, and others; for each, we elaborate on the update modes, detection modes, and the types of drift they adapt to. In addition, we briefly describe the characteristics and application fields of deep learning algorithms that use concept drift adaptation methods. Finally, we summarize common datasets and evaluation metrics and present future directions.
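As a concrete illustration of the detection mode of drift adaptation, here is a minimal DDM-style error-rate monitor. It is a generic sketch, not a method from this survey: the detector, its warm-up length, and the toy stream are all illustrative choices.

```python
class DriftDetector:
    """Minimal DDM-style concept drift detector: track the cumulative error
    rate p and its standard deviation s, remember the lowest p + s seen, and
    flag drift when p + s rises above p_min + 3 * s_min."""
    WARMUP = 100  # ignore early, noisy estimates of the error rate

    def __init__(self, drift_level: float = 3.0):
        self.n = 0
        self.errors = 0
        self.p_min = float("inf")
        self.s_min = float("inf")
        self.drift_level = drift_level

    def update(self, is_error: bool) -> bool:
        self.n += 1
        self.errors += int(is_error)
        p = self.errors / self.n
        s = (p * (1 - p) / self.n) ** 0.5
        if self.n < self.WARMUP:
            return False
        if p + s < self.p_min + self.s_min:
            self.p_min, self.s_min = p, s  # new best operating point
        return p + s > self.p_min + self.drift_level * self.s_min

# Deterministic toy stream: a steady 10% error rate for 500 steps, then the
# concept changes and every prediction is wrong.
stream = [i % 10 == 0 for i in range(500)] + [True] * 100
det = DriftDetector()
drift_at = next((i for i, e in enumerate(stream) if det.update(e)), None)
```

On this stream the alarm fires shortly after step 500, once the cumulative error rate has climbed past the control limit; a deployed system would retrain or adapt the model at that point.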

5.
Progress in Biomedical Optics and Imaging - Proceedings of SPIE ; 12467, 2023.
Article in English | Scopus | ID: covidwho-20244646

ABSTRACT

It is important to evaluate medical imaging artificial intelligence (AI) models for possible implicit discrimination (the ability to distinguish between subgroups unrelated to the AI model's specific clinical task) and disparate impact (a difference in outcome rate between subgroups). We studied potential implicit discrimination and disparate impact of a published deep learning/AI model for predicting ICU admission for COVID-19 within 24 hours of imaging. The IRB-approved, HIPAA-compliant dataset contained 8,357 chest radiography exams from February 2020 to January 2022 (12% ICU admission within 24 hours) and was separated by patient into training, validation, and test sets (64%, 16%, 20% split). The AI output was evaluated in two demographic categories: sex assigned at birth (subgroups male and female) and self-reported race (subgroups Black/African-American and White). We failed to show statistical evidence that the model could implicitly discriminate between members of subgroups categorized by race based on prediction scores (area under the receiver operating characteristic curve, AUC: median [95% confidence interval, CI]: 0.53 [0.48, 0.57]), but there was some marginal evidence of implicit discrimination between members of subgroups categorized by sex (AUC: 0.54 [0.51, 0.57]). No statistical evidence of disparate impact (DI) was observed between the race subgroups (i.e., the 95% CI of the ratio of the favorable outcome rate between the two subgroups included one) for the example operating point of the maximized Youden index, but some evidence of disparate impact to the male subgroup based on sex was observed. These results help develop the evaluation of implicit discrimination and disparate impact of AI models in the context of decision thresholds. © COPYRIGHT SPIE. Downloading of the abstract is permitted for personal use only.
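The implicit-discrimination test described above amounts to asking how well the model's prediction scores separate two subgroups, summarized by an AUC near 0.5. Here is a minimal numpy sketch with entirely synthetic data; the `auc` helper and the subgroup labels are illustrative, not the study's pipeline.

```python
import numpy as np

def auc(scores: np.ndarray, labels: np.ndarray) -> float:
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen member of group 1 scores higher than a randomly chosen member of
    group 0, counting ties as 1/2. O(n0 * n1), fine for a sketch."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Synthetic check: if the model's output carries no subgroup information,
# scoring "subgroup membership" with it should give an AUC close to 0.5.
rng = np.random.default_rng(0)
scores = rng.random(1000)                 # stand-in model outputs
subgroup = rng.integers(0, 2, 1000)       # labels independent of the scores
subgroup_auc = auc(scores, subgroup)      # close to 0.5 by construction
```

An AUC whose confidence interval excludes 0.5 would, under this framing, be evidence that the scores implicitly encode subgroup membership.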

6.
Journal of Computational Biophysics & Chemistry ; : 1-19, 2023.
Article in English | Academic Search Complete | ID: covidwho-20244584

ABSTRACT

Topological data analysis (TDA) is an emerging field in mathematics and data science. Its central technique, persistent homology, has had tremendous success in many science and engineering disciplines. However, persistent homology has limitations, including its inability to handle heterogeneous information (such as multiple types of geometric objects); its being qualitative rather than quantitative (e.g., counting a 5-member ring the same as a 6-member ring); and its failure to describe nontopological changes, such as homotopic changes in protein–protein binding. Persistent topological Laplacians (PTLs), such as the persistent Laplacian and the persistent sheaf Laplacian, were proposed to overcome these limitations. In this work, we examine the modeling and analysis power of PTLs in the study of the protein structures of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) spike receptor binding domain (RBD). First, we employ PTLs to study how RBD mutation-induced structural changes of RBD-angiotensin-converting enzyme 2 (ACE2) binding complexes are captured in the changes of the PTLs' spectra among SARS-CoV-2 variants. Additionally, we use PTLs to analyze the structural changes induced by RBD-ACE2 binding across various SARS-CoV-2 variants. Finally, we explore the impacts of computationally generated RBD structures on a topological deep learning paradigm and on predictions of deep mutational scanning datasets for the SARS-CoV-2 Omicron BA.2 variant. Our results indicate that PTLs have advantages over persistent homology in analyzing protein structural changes and provide a powerful new TDA tool for data science. [FROM AUTHOR] Copyright of Journal of Computational Biophysics & Chemistry is the property of World Scientific Publishing Company and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. 
However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all abstracts.)

7.
Cancer Research Conference: American Association for Cancer Research Annual Meeting, ACCR ; 83(7 Supplement), 2023.
Article in English | EMBASE | ID: covidwho-20244501

ABSTRACT

Background: In the field of antibody engineering, an essential task is to design a novel antibody whose paratopes bind a specific antigen at the correct epitopes. Understanding antibody structure and its paratope can facilitate a mechanistic understanding of its function, so antibody structure prediction from sequence alone has always been a highly valuable problem for de novo antibody design. AlphaFold2 (AF2), a breakthrough in structural biology, addresses this protein structure prediction problem with a deep learning model. However, its computational cost and unsatisfactory prediction accuracy on antibodies, especially on the complementarity-determining regions (CDRs), limit its applications in de novo antibody design. Method(s): To learn informative representations of antibodies, we trained a deep antibody language model (ALM) on curated sequences from the Observed Antibody Space database via a well-designed transformer model. We also developed a novel model named xTrimoABFold++ to predict antibody structure from the antibody sequence alone, based on the pretrained ALM together with efficient evoformers and structural modules. The model was trained end-to-end on the antibody structures in the PDB by minimizing an ensemble loss of a domain-specific focal loss on the CDRs and the frame-aligned point loss. Result(s): xTrimoABFold++ outperforms AF2, OmegaFold, and HelixFold-Single, with a 30+% improvement in RMSD. It is also 151 times faster than AF2 and predicts antibody structure at atomic accuracy within 20 seconds. On recently released antibodies, for example cemiplimab against PD-1 (PDB: 7WVM) and the cross-neutralizing SARS-CoV-2 antibody 6D6 (PDB: 7EAN), the RMSDs of xTrimoABFold++ are 0.344 and 0.389, respectively. Conclusion(s): To the best of our knowledge, xTrimoABFold++ achieves the state of the art in antibody structure prediction. 
Its improvements in both accuracy and efficiency make it a valuable tool for de novo antibody design, and it could enable further advances in immunotherapy.

8.
ACM Web Conference 2023 - Proceedings of the World Wide Web Conference, WWW 2023 ; : 3592-3602, 2023.
Article in English | Scopus | ID: covidwho-20244490

ABSTRACT

We study the behavior of an economic platform (e.g., Amazon, Uber Eats, Instacart) under shocks, such as COVID-19 lockdowns, and the effect of different regulation considerations. To this end, we develop a multi-agent simulation environment of a platform economy in a multi-period setting where shocks may occur and disrupt the economy. Buyers and sellers are heterogeneous and modeled as economically-motivated agents, choosing whether or not to pay fees to access the platform. We use deep reinforcement learning to model the fee-setting and matching behavior of the platform, and consider two major types of regulation frameworks: (1) taxation policies and (2) platform fee restrictions. We offer a number of simulated experiments that cover different market settings and shed light on regulatory tradeoffs. Our results show that while many interventions are ineffective with a sophisticated platform actor, we identify a particular kind of regulation - fixing fees to the optimal, no-shock fees while still allowing a platform to choose how to match buyers and sellers - as holding promise for promoting the efficiency and resilience of the economic system. © 2023 ACM.

9.
ACM International Conference Proceeding Series ; 2022.
Article in English | Scopus | ID: covidwho-20244307

ABSTRACT

This paper proposes a deep learning-based approach to detect COVID-19 infections in lung tissue from chest Computed Tomography (CT) images. A two-stage classification model is designed to identify the infection from CT scans of COVID-19 and Community Acquired Pneumonia (CAP) patients. The proposed neural model, named Residual C-NiN, uses a modified convolutional neural network (CNN) with residual connections and a Network-in-Network (NiN) architecture for COVID-19 and CAP detection. The model is trained with the Signal Processing Grand Challenge (SPGC) 2021 COVID dataset and achieves a slice-level classification accuracy of 93.54% on chest CT images and a patient-level classification accuracy of 86.59%, with class-wise sensitivities of 92.72%, 55.55%, and 95.83% for the COVID-19, CAP, and Normal classes, respectively. Experimental results show the benefit of adding NiN and residual connections to the proposed neural architecture, with significant improvement over existing state-of-the-art methods reported in the literature. © 2022 ACM.

10.
2023 3rd International Conference on Advances in Electrical, Computing, Communication and Sustainable Technologies, ICAECT 2023 ; 2023.
Article in English | Scopus | ID: covidwho-20244302

ABSTRACT

Healthcare systems all over the world were strained as the spread of the COVID-19 pandemic widened. With no viable medical therapies or vaccinations available, the only realistic strategy to avoid asymptomatic transmission was to monitor social distancing. This work presents a computer vision-based framework that uses deep learning to analyze images and measure social distancing. The technique uses a key-point regressor to identify important feature points, built on the Visual Geometry Group network (VGG19), a standard multi-layer Convolutional Neural Network (CNN) architecture, and MobileNetV2, a computer vision network that advances the state of the art in mobile visual recognition, including semantic segmentation, classification, and object identification. VGG19 and MobileNetV2 were trained on a Kaggle dataset. Bounding boxes are drawn for each detected person, even in sizeable crowds, and faces flagged in red are then analyzed by MobileNetV2 to detect whether the person is wearing a mask. The distance between the observed people is calculated using the Euclidean distance. Pretrained models such as YOLOv3 (You Only Look Once), a real-time object detection system, R-CNN, and ResNet50 are used in our embedded vision environment to identify social distancing in images. With transfer learning, YOLOv3 reaches an overall accuracy of 95% and runs in 22 ms, four times faster than the other predefined models. The proposed model achieves an accuracy of 96.67% using VGG19 and 98.38% using MobileNetV2, beating all other models in its ability to estimate social distancing and face-mask use. © 2023 IEEE.
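The distance check described above reduces to pairwise Euclidean distances between detected person centroids. A small sketch follows, assuming the detector's outputs have already been mapped to metric ground-plane coordinates (a camera-calibration step the abstract does not detail); the function name and all coordinates are hypothetical.

```python
import numpy as np

MIN_DISTANCE_M = 1.0  # allowed social-distance limit, in metres

def violating_pairs(centroids, min_dist=MIN_DISTANCE_M):
    """Return index pairs of detected people whose ground-plane centroids
    (n x 2, in metres) are closer than min_dist, by Euclidean distance."""
    c = np.asarray(centroids, dtype=float)
    return [(i, j)
            for i in range(len(c)) for j in range(i + 1, len(c))
            if np.linalg.norm(c[i] - c[j]) < min_dist]

# Hypothetical frame: persons 0 and 1 stand 0.8 m apart (they would be
# boxed in red), while person 2 is far away (boxed in green).
people = [(0.0, 0.0), (0.8, 0.0), (5.0, 5.0)]
too_close = violating_pairs(people)
```

In a full pipeline each flagged pair would trigger the red-box rendering; the unflagged detections stay green.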

11.
Electronics ; 12(11):2378, 2023.
Article in English | ProQuest Central | ID: covidwho-20244207

ABSTRACT

This paper presents a control system for indoor safety measures using a Faster R-CNN (Region-based Convolutional Neural Network) architecture. The proposed system aims to ensure the safety of occupants in indoor environments by detecting and recognizing potential safety hazards in real time, such as capacity control, social distancing, or mask use. Using deep learning techniques, the system detects these situations to be controlled, notifying the person in charge of the company if any of these are violated. The proposed system was tested in a real teaching environment at Rey Juan Carlos University, using Raspberry Pi 4 as a hardware platform together with an Intel Neural Stick board and a pair of PiCamera RGB (Red Green Blue) cameras to capture images of the environment and a Faster R-CNN architecture to detect and classify objects within the images. To evaluate the performance of the system, a dataset of indoor images was collected and annotated for object detection and classification. The system was trained using this dataset, and its performance was evaluated based on precision, recall, and F1 score. The results show that the proposed system achieved a high level of accuracy in detecting and classifying potential safety hazards in indoor environments. The proposed system includes an efficiently implemented software infrastructure to be launched on a low-cost hardware platform, which is affordable for any company, regardless of size or revenue, and it has the potential to be integrated into existing safety systems in indoor environments such as hospitals, warehouses, and factories, to provide real-time monitoring and alerts for safety hazards. Future work will focus on enhancing the system's robustness and scalability to larger indoor environments with more complex safety hazards.

12.
Proceedings of SPIE - The International Society for Optical Engineering ; 12567, 2023.
Article in English | Scopus | ID: covidwho-20244192

ABSTRACT

The COVID-19 pandemic has challenged many healthcare systems around the world, and many patients hospitalized due to this disease develop lung damage. In low- and middle-income countries, people living in rural and remote areas have very limited access to adequate health care. Ultrasound is a safe, portable, and accessible alternative; however, it has limitations, such as being operator-dependent and requiring a trained professional. Lung ultrasound volume sweep imaging is a potential solution to this shortage of physicians. To support this protocol, image processing together with machine learning is a potential methodology for an automatic lung-damage screening system. In this paper we present automatic detection of lung ultrasound artifacts using a deep neural network, identifying clinically relevant artifacts such as pleural lines and A-lines in ultrasound examinations taken as part of clinical screening of patients with suspected lung damage. The model achieved encouraging preliminary results: sensitivity of 94%, specificity of 81%, and accuracy of 89% in identifying the presence of A-lines. The present study could lead to an alternative, operator-independent solution for lung-damage screening in rural areas and to the integration of AI-based technology as a complementary tool for healthcare professionals. © 2023 SPIE.

13.
Decision Making: Applications in Management and Engineering ; 6(1):502-534, 2023.
Article in English | Scopus | ID: covidwho-20244096

ABSTRACT

The COVID-19 pandemic has caused the death of many people around the world and has also caused economic problems for every country. The literature contains many studies that analyze and predict the spread of COVID-19 in cities and countries, but none that predict and analyze its cross-country spread worldwide. In this study, a deep learning-based hybrid model was developed to predict and analyze the cross-country spread of COVID-19, and a case study was carried out for the Emerging Seven (E7) and Group of Seven (G7) countries. The aim is to reduce the workload of healthcare professionals and to support health planning by predicting the daily numbers of COVID-19 cases and deaths. The developed model was tested extensively using Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared (R2). The experimental results showed that the developed model predicts and analyzes the cross-country spread of COVID-19 in the E7 and G7 countries more successfully than Linear Regression (LR), Random Forest (RF), Support Vector Machine (SVM), Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM). The developed model has an R2 value close to 0.9 in predicting the number of daily cases and deaths in the majority of the E7 and G7 countries. © 2023 by the authors.
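The four evaluation metrics named above have simple closed forms. A small numpy sketch with hypothetical case-count data (not the study's results) makes the definitions concrete:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, MAE, and R2 for a forecast against observed values."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))            # mean squared error
    rmse = float(np.sqrt(mse))                # root mean squared error
    mae = float(np.mean(np.abs(err)))         # mean absolute error
    ss_res = float(np.sum(err ** 2))          # residual sum of squares
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot                # coefficient of determination
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "R2": r2}

# Hypothetical daily case counts vs. a model's predictions.
m = regression_metrics([100, 120, 140, 160], [110, 115, 150, 155])
```

For these toy numbers the errors are ±10 and ±5, giving MSE = 62.5, MAE = 7.5, and R2 = 0.875; an R2 near 0.9, as the abstract reports, means the model explains about 90% of the variance in daily counts.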

14.
IEEE Transactions on Radiation and Plasma Medical Sciences ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-20244069

ABSTRACT

Automatic lung infection segmentation in computed tomography (CT) scans can offer great assistance in radiological diagnosis by improving accuracy and reducing the time required for diagnosis. The biggest challenges for deep learning (DL) models in segmenting infection regions are the high variance in infection characteristics, the fuzzy boundaries between infected and normal tissues, and the difficulty of obtaining large amounts of annotated training data. To resolve these issues, we propose a Modified U-Net (Mod-UNet) model with minor architectural changes and significant modifications to the training process of the vanilla 2D UNet: we updated the loss function, optimization function, and regularization methods, added a learning rate scheduler, and applied advanced data augmentation techniques. Segmentation results on two Covid-19 lung CT segmentation datasets show that Mod-UNet performs considerably better than the baseline U-Net. Furthermore, to mitigate the lack of annotated data, Mod-UNet is used in a semi-supervised framework (Semi-Mod-UNet) that uses a random sampling approach to progressively enlarge the training dataset from a large pool of unannotated CT slices. Exhaustive experiments on the two Covid-19 CT segmentation datasets and on a real lung CT volume show that Mod-UNet and Semi-Mod-UNet significantly outperform other state-of-the-art approaches in automated lung infection segmentation. IEEE

15.
Proceedings - IEEE International Conference on Device Intelligence, Computing and Communication Technologies, DICCT 2023 ; : 401-405, 2023.
Article in English | Scopus | ID: covidwho-20244068

ABSTRACT

The COVID-19 virus spreads very rapidly through contact with infected persons, and the outbreak was treated as an acute pandemic. According to WHO data, more than 663 million infected cases and 6.7 million deaths were confirmed worldwide as of December 2022. Given these large numbers, ignoring the virus can cause harm to people worldwide. Most people are now vaccinated, but as per standard WHO guidance, social distancing remains the best practice to avoid spreading COVID-19 variants. This is difficult to monitor manually by analyzing live camera feeds, so there is a need for an automated artificial-intelligence-based system that detects and tracks humans for monitoring. To accomplish this task, many deep learning models have been proposed that calculate the distance between each pair of human objects detected in each frame. This paper presents an efficient deep learning monitoring system that considers both the distance and the velocity of detected objects, avoiding processing of every frame and thereby improving computational performance in frames per second. Detected persons closer than the allowed limit (1 m) are marked in red, and all others are marked in green. A comparison with and without direction consideration yielded average throughputs of 20.08 FPS and 22.98 FPS (frames per second) respectively, a 14.44% speed difference, while detection accuracy is preserved. © 2023 IEEE.

16.
Journal of Information Systems Engineering and Business Intelligence ; 9(1):84-94, 2023.
Article in English | Scopus | ID: covidwho-20244034

ABSTRACT

Background: During the Covid-19 period, the government issued policies to deal with it, and these policies drew public opinion as a form of reaction. The easiest way to gauge the public's response is through the social medium Twitter. However, Twitter data have limitations: facts and personal opinions are mixed and must be distinguished. Opinions expressed by the public can be positive or negative, so correlation is needed to link opinions and their emotions. Objective: This study discusses sentiment and emotion detection to understand public opinion accurately. Sentiment and emotion are analyzed using the Pearson correlation to determine their relationship. Methods: The datasets, retrieved from Twitter, concerned public opinion on Covid-19. The data were annotated with sentiment and emotion using the Pearson correlation and then preprocessed. Afterward, single-model classification was carried out using machine learning methods (Support Vector Machine, Random Forest, Naïve Bayes) and a deep learning method (Bidirectional Encoder Representations from Transformers). The classification process focused on accuracy and F1-score evaluation. Results: Three scenarios were used to determine sentiment and emotion: with the aspect-based and correlation-based factors, without those factors, and with aspect-based sentiment only. The scenario using both factors obtained an accuracy of 97%, while 96% was acquired without them. Conclusion: The use of aspect and correlation with the Pearson correlation has helped achieve a more accurate understanding of public opinion regarding sentiment and emotion. © 2023 The Authors. Published by Universitas Airlangga.
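The Pearson correlation used above to link sentiment and emotion is the normalized covariance of the two score series. A minimal sketch with toy per-tweet scores (the values and the score scales are illustrative only, not the study's annotation scheme):

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson correlation coefficient between two equal-length series:
    covariance divided by the product of the standard deviations."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()
    return float((xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum()))

# Toy per-tweet scores: sentiment polarity in [-1, 1] and a signed
# emotion-intensity score. Positive r means sentiment and emotion move
# together across tweets.
sentiment = [0.9, 0.4, -0.2, -0.8, -0.5]
emotion   = [0.8, 0.5,  0.1, -0.6, -0.4]
r = pearson_r(sentiment, emotion)  # strongly positive for these toy values
```

A value of r near +1 or -1 indicates a strong linear link between the sentiment and emotion annotations; values near 0 would suggest the two labels carry independent information.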

17.
IEEE Access ; : 1-1, 2023.
Article in English | Scopus | ID: covidwho-20243873

ABSTRACT

As intelligent driving vehicles move from concept into everyday life, the combination of safe driving and artificial intelligence has become the new direction of future transportation development. Autonomous driving technology builds on control algorithms and model recognition. In this paper, a cloud-based interconnected multi-sensor fusion autonomous vehicle system is proposed that uses deep learning (YOLOv4) and improved ORB algorithms to identify pedestrians, vehicles, and various traffic signs. A cloud-based interactive system is built to enable vehicle owners to check the status of their vehicles at any time. To serve multiple applications of autonomous vehicles, the environment-perception technology of multi-sensor fusion processing broadens their uses by adding automatic speech recognition (ASR), a vehicle-following mode, and a road-patrol mode. These functions enable autonomous driving to be used in applications such as agricultural irrigation, road firefighting, and contactless delivery during new coronavirus outbreaks. Finally, using embedded system equipment, an intelligent car was built for experimental verification, and the overall recognition accuracy of the system was over 96%. Author

18.
ACM International Conference Proceeding Series ; 2022.
Article in English | Scopus | ID: covidwho-20243833

ABSTRACT

The COVID-19 pandemic still affects most parts of the world today. Despite extensive research on diagnosis, prognosis, and treatment, a big challenge remains the limited number of expert radiologists who provide diagnosis and prognosis from X-ray images. Thus, to make the diagnosis of COVID-19 accessible and quicker, several researchers have proposed deep-learning-based Artificial Intelligence (AI) models. While most of these machine and deep learning models work in theory, they may not find acceptance among the medical community for clinical use due to weak statistical validation. For this article, radiologists' views were considered to understand the correlation between the theoretical findings and real-life observations. The article explores Convolutional Neural Network (CNN) classification models to build four-class classifiers, viz. "COVID-19", "Lung Opacity", "Pneumonia", and "Normal", which also provide the uncertainty measure associated with each class. The authors also employ various pre-processing techniques to enhance the X-ray images for specific features. To address over-fitting during training and the class-imbalance problem in the dataset, we use Monte Carlo dropout and focal loss, respectively. Finally, we provide a comparative analysis of the following classification models - ResNet-18, VGG-19, ResNet-152, MobileNet-V2, Inception-V3, and EfficientNet-V2 - matching the state-of-the-art results on the Open Benchmark Chest X-ray datasets, with a sensitivity of 0.9954, specificity of 0.9886, precision of 0.9880, F1-score of 0.9851, accuracy of 0.9816, and an area under the receiver operating characteristic curve (ROC-AUC) of 0.9781. © 2022 ACM.
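Monte Carlo dropout, used above both as regularization and as the source of the per-class uncertainty measure, keeps dropout active at inference and averages many stochastic forward passes. The following is a toy numpy sketch on a single logistic layer: the weights, inputs, and sample count are made up, and the article's actual models are full CNNs.

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_dropout_predict(x, w, b, n_samples=200, p_drop=0.5):
    """Monte Carlo dropout on a toy logistic model: sample a Bernoulli
    dropout mask per forward pass, rescale surviving weights (inverted
    dropout), and return the mean prediction and its standard deviation
    across passes as an uncertainty estimate."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(w.shape) >= p_drop          # random dropout mask
        logits = x @ (w * mask / (1.0 - p_drop)) + b  # stochastic forward pass
        preds.append(1.0 / (1.0 + np.exp(-logits)))   # sigmoid output
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)      # prediction, uncertainty

# Toy weights standing in for a trained classifier's final layer,
# and two hypothetical input feature vectors.
w = np.array([1.5, -2.0, 0.5])
b = 0.1
x = np.array([[0.2, 0.1, 0.4], [2.0, -1.5, 1.0]])
mean, std = mc_dropout_predict(x, w, b)
```

In the article's setting, the per-class standard deviation across stochastic passes is what lets the classifier report how confident it is in each of the four diagnostic labels.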

19.
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) ; 13741 LNCS:154-159, 2023.
Article in English | Scopus | ID: covidwho-20243449

ABSTRACT

Due to the recent COVID-19 pandemic, people tend to wear masks both indoors and outdoors, so face recognition systems such as Face ID have shown a decline in accuracy. Consequently, many studies have been conducted to improve the accuracy of recognition between masked faces. Most of them aimed to enhance datasets and retrain models to reach reasonable accuracy, but little research has been done to explain the reasons for the improvement. We therefore focused on finding an explainable reason for the improvement in the model's accuracy. First, we observed that accuracy actually increased by 12.86% after training with a masked dataset. We then applied Explainable AI (XAI) to see whether the model really focuses on the regions of interest. The generated heatmaps show that differences in the training data change the model's range of focus. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.

20.
Proceedings of SPIE - The International Society for Optical Engineering ; 12587, 2023.
Article in English | Scopus | ID: covidwho-20243426

ABSTRACT

With the outbreak of COVID-19 in 2020, timely and effective diagnosis and treatment of each COVID-19 patient is particularly important. This paper draws on the strengths of deep learning in image recognition, takes ResNet as the basic network framework, and experiments with improvements to the residual structure on this basis. Tested on an open-source COVID-19 chest radiograph dataset, the model reaches an accuracy of 82.3%. A series of experiments shows that the trained model generalizes well, is accurate, and converges quickly. This paper demonstrates the feasibility of the improved residual neural network for the diagnosis of COVID-19. © 2023 SPIE.
